Prompt Engineering: The Primary Interface to Generative AI
AI011 Lesson 2

Prompt Engineering Fundamentals

Prompt engineering (PE) is the process of designing and optimizing text inputs to guide large language models (LLMs) toward high-quality, consistent results.

1. Defining the Interface

What: It serves as the primary "programming" interface for generative AI.
Why: It turns interaction from raw, unpredictable text prediction into deliberate, structured instruction execution.

2. Model Basics

  • Base LLMs: trained only to predict the next token from statistical relationships in a massive dataset, maximizing the probability $P(w_t | w_1, w_2, ..., w_{t-1})$.
  • Instruction-tuned LLMs: fine-tuned with reinforcement learning from human feedback (RLHF) to explicitly follow specific directions and act as a helpful assistant.
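The objective above can be made concrete with a toy bigram model. This is a minimal sketch, not how a real LLM works: actual models condition on the full context with a neural network over tokens, while this example only estimates $P(w_t | w_{t-1})$ from raw counts in a tiny made-up corpus.

```python
from collections import Counter, defaultdict

# Toy corpus (an invented example) for estimating next-word probabilities.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each previous word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_probs(prev):
    """Return the conditional distribution P(next | prev) from counts."""
    total = sum(counts[prev].values())
    return {w: c / total for w, c in counts[prev].items()}

# "the" is followed by "cat" twice and "mat" once in the corpus,
# so P(cat | the) = 2/3 and P(mat | the) = 1/3.
print(next_token_probs("the"))
```

A base LLM is, in essence, a vastly more powerful version of this predictor; instruction tuning then layers directive-following behavior on top of it.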

3. Components of a Successful Prompt

How: A robust prompt typically contains:

  • Instruction: the specific action desired.
  • Primary content: the target data to be processed.
  • Secondary content: parameters, format, or constraints (used to mitigate randomness and hallucination).
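The three components above can be assembled into a single prompt string. This is a sketch with invented example text; the triple-backtick delimiters separating instruction from data follow the best practice discussed later in the quiz.

```python
# The three prompt components (example values are illustrative).
instruction = "Summarize the text below in one sentence."
primary_content = "Prompt engineering designs text inputs to guide LLMs."
secondary_content = "Respond in plain English. Maximum 20 words."

# Delimit the primary content so the model cannot confuse
# the data to be processed with the instructions themselves.
prompt = (
    f"{instruction}\n\n"
    f"Text: ```{primary_content}```\n\n"
    f"Constraints: {secondary_content}"
)
print(prompt)
```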
The Tokenization Reality
Models do not read words; they process tokens: smaller units of a text sequence over which statistical probabilities are computed.
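To illustrate why a word can become several tokens, here is a deliberately naive greedy subword splitter over a tiny hand-picked vocabulary. Real tokenizers (e.g. byte-pair encoding) learn their vocabulary and merge rules from data; this sketch only demonstrates the word-to-subword split.

```python
# A tiny, hand-picked vocabulary (an assumption for illustration only).
VOCAB = ["token", "ization", "un", "predict", "able"]

def naive_tokenize(text):
    """Greedy longest-match split against the fixed vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for piece in sorted(VOCAB, key=len, reverse=True):
            if text.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

print(naive_tokenize("tokenization"))  # -> ['token', 'ization']
```

The key takeaway: one "word" often maps to multiple tokens, which is why token counts, not word counts, determine context limits and cost.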
Question 1
What is the primary difference between a Base LLM and an Instruction-Tuned LLM?
Base LLMs only process code, while Instruction-Tuned LLMs process natural language.
Instruction-Tuned models are refined through human feedback to follow specific directions, whereas Base LLMs focus on statistical token prediction.
Base LLMs use tokens, but Instruction-Tuned LLMs read whole words at a time.
There is no difference; they are two terms for the exact same architecture.
Question 2
Why is the use of delimiters (like triple backticks or hashes) considered a best practice in prompt engineering?
They reduce the token count, making the API call cheaper.
They force the model to output in JSON format.
To separate instructions from the content the model needs to process, enforcing a clean 'separation of concerns'.
They increase the model's temperature setting automatically.
Challenge: Tutor AI Constraints
Refining prompts for educational safety.
You are building a tutor-style AI for a startup. The model is currently giving away answers too quickly and sometimes making up facts when it doesn't know the answer.
Task 1
Implement "Chain-of-thought" prompting in the system message to prevent the AI from giving away answers immediately.
Solution:
Instruct the model to: "Work through the problem step-by-step before providing the final answer. Do not reveal the final answer until the student has attempted the steps."
Task 2
Apply an "out" to prevent fabrications (hallucinations) when the AI doesn't know the answer.
Solution:
Add the explicit instruction: "If you do not know the answer based on the provided text or standard curriculum, state clearly that you do not know."
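Both solutions can be combined into one system message. This is a sketch using the common chat-API convention of role/content dictionaries; no API call is made, and the helper name `build_messages` is hypothetical.

```python
# System message combining Task 1 (chain-of-thought pacing)
# and Task 2 (an explicit "out" against hallucination).
SYSTEM_PROMPT = (
    "You are a patient tutor. Work through each problem step-by-step "
    "before providing the final answer. Do not reveal the final answer "
    "until the student has attempted the steps. "
    "If you do not know the answer based on the provided text or "
    "standard curriculum, state clearly that you do not know."
)

def build_messages(student_question):
    """Assemble a chat-style message list for a single student turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": student_question},
    ]

messages = build_messages("What is 12 x 13?")
print(messages[0]["role"], "->", messages[1]["content"])
```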